-
The challenge of optimizing personalized learning pathways to maximize student engagement and minimize task completion time, while adhering to prerequisite constraints, remains a significant issue in educational technology. This paper applies the Salp Swarm Algorithm (SSA) as a new solution to this problem. We compare SSA against traditional optimization techniques, namely the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The results demonstrate that SSA significantly outperforms these methods, achieving a lower average fitness value of 307.0, compared with 320.0 for GA and 315.0 for PSO. Furthermore, SSA exhibits greater consistency, with a lower standard deviation, and superior computational efficiency, as evidenced by faster execution times. The success of SSA is attributed to its balanced approach to exploration and exploitation of the search space. These findings highlight the potential of SSA as an effective tool for optimizing personalized learning experiences.
Free, publicly-accessible full text available May 7, 2026
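The salp-chain update at the heart of SSA can be sketched in a few lines. The function below is a generic illustration of the standard SSA rules (a leader sampling around the best-so-far "food" solution, followers averaging along the chain); the fitness function, bounds, and population settings are placeholder assumptions, not this paper's configuration.

```python
import math
import random

def ssa_minimize(fitness, dim, lb, ub, n_salps=30, iters=500, seed=0):
    """Minimize `fitness` over the box [lb, ub]^dim with the standard
    Salp Swarm Algorithm update rules (illustrative defaults only)."""
    rng = random.Random(seed)
    salps = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_salps)]
    food = min(salps, key=fitness)[:]            # best solution seen so far
    best_val = fitness(food)
    for t in range(1, iters + 1):
        # c1 decays over time, shifting the search from exploration to exploitation.
        c1 = 2.0 * math.exp(-((4.0 * t / iters) ** 2))
        for i, s in enumerate(salps):
            if i == 0:                           # leader samples around the food source
                for j in range(dim):
                    step = c1 * ((ub - lb) * rng.random() + lb)
                    s[j] = food[j] + step if rng.random() < 0.5 else food[j] - step
            else:                                # followers drift toward their predecessor
                prev = salps[i - 1]
                for j in range(dim):
                    s[j] = (s[j] + prev[j]) / 2.0
            for j in range(dim):                 # keep every salp inside the box
                s[j] = min(max(s[j], lb), ub)
            val = fitness(s)
            if val < best_val:                   # greedy food-source update
                best_val, food = val, s[:]
    return food, best_val
```

On a convex test function such as the sphere function, the chain contracts onto the optimum as `c1` decays, which is the exploration/exploitation balance the abstract credits for SSA's performance.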
-
Free, publicly-accessible full text available February 2, 2026
-
Predicting valence and arousal values from EEG signals has been a long-standing research topic within the field of affective computing, or emotional AI. Although numerous valid techniques for predicting valence and arousal values from EEG signals have been established and verified, the EEG data collection process itself is relatively undocumented. This creates an artificial learning curve for new researchers seeking to incorporate EEGs into their research workflows. In this article, a study is presented that illustrates the importance of a strict EEG data collection process for EEG affective computing studies. The work was evaluated by first validating the effectiveness of a machine learning prediction model on the DREAMER dataset, then showcasing the lack of effectiveness of the same model on cursorily obtained EEG data.
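The evaluation strategy — validate a model on carefully collected data, then test it on cursorily collected data — can be illustrated with a toy synthetic stand-in (this is not the DREAMER data or the study's actual model; the features, noise levels, and linear learner are all invented for illustration):

```python
import random

def fit_line(xs, ys):
    """Least-squares slope and intercept for a single feature."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def collection_quality_demo(seed=0):
    """Fit a 'valence' model on carefully collected features, then test it
    on both carefully and cursorily collected ones (synthetic data)."""
    rng = random.Random(seed)
    # Strict protocol: a band-power-like feature tracks valence with little noise.
    x_strict = [rng.uniform(0.0, 1.0) for _ in range(300)]
    y_strict = [2.0 * x + rng.gauss(0.0, 0.05) for x in x_strict]
    # Cursory protocol: same underlying signal, buried in artefact-level noise.
    x_cursory = [rng.uniform(0.0, 1.0) for _ in range(100)]
    y_cursory = [2.0 * x + rng.gauss(0.0, 1.0) for x in x_cursory]
    slope, icpt = fit_line(x_strict[:200], y_strict[:200])  # train on strict data only
    def mae(xs, ys):
        return sum(abs(y - (slope * x + icpt)) for x, y in zip(xs, ys)) / len(xs)
    return mae(x_strict[200:], y_strict[200:]), mae(x_cursory, y_cursory)
```

The same fitted model shows a much larger error on the noisily collected set, mirroring the article's finding that a sloppy collection protocol undermines an otherwise valid prediction pipeline.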
-
Cloud computing is a concept introduced in the information technology era, with its main components being grid, distributed, and utility computing. The cloud is being developed continuously and naturally comes with many challenges, one of which is scheduling. A schedule, or timeline, is a mechanism used to optimize the time for performing a task or set of tasks, and a scheduling process is responsible for choosing the best resources on which to perform a task. The main goal of a scheduling algorithm is to improve the efficiency and quality of the service while at the same time ensuring the acceptability and effectiveness of the targets. The task scheduling problem is one of the most important NP-hard problems in the cloud domain and, so far, many techniques have been proposed as solutions, including genetic algorithms (GAs), particle swarm optimization (PSO), and ant colony optimization (ACO). To address this problem, in this paper one of the collective intelligence algorithms, the Salp Swarm Algorithm (SSA), has been extended, improved, and applied. The performance of the proposed algorithm is compared with that of GAs, PSO, continuous ACO, and the basic SSA. The results show that our algorithm generally outperforms the others; for example, compared to the basic SSA, the proposed method reduces makespan by approximately 21% on average.
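Applying a continuous optimizer like SSA to discrete task scheduling hinges on two pieces: decoding a continuous position into a task-to-VM assignment, and scoring that assignment by makespan. A minimal sketch of that decode-and-evaluate step (the task lengths and VM speeds are made-up illustrative values, not from the paper's experiments):

```python
def makespan(assignment, task_len, vm_speed):
    """Finish time of the busiest VM under a task-to-VM assignment;
    a task of length L runs on VM v in L / vm_speed[v] time units."""
    load = [0.0] * len(vm_speed)
    for length, vm in zip(task_len, assignment):
        load[vm] += length / vm_speed[vm]
    return max(load)

def decode(position, n_vms):
    """Map a continuous salp position in [0, n_vms) to a discrete assignment
    by truncating each coordinate to a VM index."""
    return [min(int(p), n_vms - 1) for p in position]

# Hypothetical instance: five task lengths, two VMs (the second twice as fast).
tasks, speeds = [4, 2, 6, 3, 5], [1, 2]
plan = decode([0.3, 1.7, 0.9, 1.2, 1.99], n_vms=2)   # -> [0, 1, 0, 1, 1]
print(makespan(plan, tasks, speeds))                  # -> 10.0
```

In the full algorithm, `makespan(decode(position, n_vms), ...)` serves as the fitness function that the salp chain minimizes.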
-
In many smart city projects, a common choice for capturing spatial information is the inclusion of lidar data, but this decision often invokes severe growing pains within the existing infrastructure. In this article, the authors introduce a data pipeline that orchestrates Apache NiFi (NiFi), Apache MiNiFi (MiNiFi), and several other tools as an automated solution for relaying and archiving lidar data captured by deployed edge devices. The lidar sensors utilized within this workflow are Velodyne Ultra Puck sensors that produce 6-7 GB packet capture (PCAP) files per hour. By compressing each file both after capture and in real time, it was found that GZIP and XZ each yielded considerable file-size savings of 2-5 GB, saved about 5 minutes in transmission time, and saved considerable CPU time. To evaluate the capabilities of the system design, the features of this data pipeline were compared against existing third-party services, Globus and RSync.
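A GZIP-versus-XZ comparison of this kind can be reproduced on any capture file with Python's standard library. The harness below measures compressed size, compression ratio, and CPU time for both codecs; it is an illustrative sketch only (the paper's inputs are multi-gigabyte Velodyne PCAP files, and its pipeline compresses in flight rather than in one read):

```python
import gzip
import lzma
import time

def compare_codecs(path):
    """Compress one capture file with GZIP and XZ and report compressed
    size, compression ratio, and CPU time for each codec."""
    with open(path, "rb") as f:
        raw = f.read()
    results = {}
    for name, compress in (("gzip", gzip.compress), ("xz", lzma.compress)):
        start = time.process_time()
        blob = compress(raw)
        results[name] = {
            "bytes": len(blob),
            "ratio": len(blob) / len(raw),   # < 1.0 means the codec helped
            "cpu_s": time.process_time() - start,
        }
    return results
```

XZ (LZMA) typically compresses tighter than GZIP at the cost of more CPU time per byte, which is the trade-off the pipeline weighs when choosing between post-capture and real-time compression.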
-
Abstract. This paper studies how to improve the accuracy of hydrologic models using machine-learning models as post-processors and presents a way to reduce the workload of creating an accurate hydrologic model by removing the calibration step. It is often challenging to develop an accurate hydrologic model due to the time-consuming model calibration procedure and the nonstationarity of hydrologic data. Our findings show that the errors of hydrologic models are correlated with model inputs. Thus motivated, we propose a modeling-error-learning-based post-processor framework that leverages this correlation to improve the accuracy of a hydrologic model. The key idea is to predict the differences (errors) between the observed values and the hydrologic model predictions using machine-learning techniques. To tackle the nonstationarity of hydrologic data, a moving-window-based machine-learning approach is proposed to enhance the error predictions by identifying local stationarity of the data with a stationarity measure developed from the Hilbert–Huang transform. Two hydrologic models, the Precipitation–Runoff Modeling System (PRMS) and the Hydrologic Modeling System (HEC-HMS), are used to evaluate the proposed framework. Two case studies exhibit the improved performance over the original models across multiple statistical metrics.
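The error-learning idea can be sketched with a toy example: learn the base model's residuals as a function of its inputs, fitting on only a recent window of data as a crude stand-in for the paper's moving-window stationarity handling. The linear error model and synthetic rainfall-runoff numbers below are illustrative assumptions, not the PRMS/HEC-HMS setup:

```python
import random

def fit_line(xs, ys):
    """Ordinary least squares for a single feature: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def demo_error_postprocessing(window=150, seed=0):
    """Synthetic rainfall-runoff demo: the base model's error is correlated
    with its input, so a model fit to recent errors can correct it."""
    rng = random.Random(seed)
    rain = [rng.uniform(0.0, 10.0) for _ in range(400)]
    observed = [3.0 * r + 1.0 + rng.gauss(0.0, 0.1) for r in rain]
    base_pred = [2.5 * r for r in rain]          # uncalibrated base model
    errors = [o - p for o, p in zip(observed, base_pred)]
    # Fit the error model on only the most recent `window` of the training
    # period, mimicking a moving window over locally stationary data.
    slope, icpt = fit_line(rain[300 - window:300], errors[300 - window:300])
    mae_base = sum(abs(e) for e in errors[300:]) / 100
    mae_corr = sum(abs(o - (p + slope * r + icpt))
                   for o, p, r in zip(observed[300:], base_pred[300:], rain[300:])) / 100
    return mae_base, mae_corr
```

Because the synthetic base model's bias grows with rainfall, the post-processor recovers most of the missing signal without recalibrating the base model itself, which is the workload reduction the abstract describes.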
